Search for: All records

Creators/Authors contains: "Lopez‐Gomez, Ignacio"


  1. Abstract: Heatwaves are projected to increase in frequency and severity with global warming. Improved warning systems would help reduce the associated loss of lives, wildfires, power disruptions, and reduction in crop yields. In this work, we explore the potential for deep learning systems trained on historical data to forecast extreme heat on short, medium and subseasonal time scales. To this purpose, we train a set of neural weather models (NWMs) with convolutional architectures to forecast surface temperature anomalies globally, 1 to 28 days ahead, at ∼200-km resolution and on the cubed sphere. The NWMs are trained using the ERA5 reanalysis product and a set of candidate loss functions, including the mean-square error and exponential losses targeting extremes. We find that training models to minimize custom losses tailored to emphasize extremes leads to significant skill improvements in the heatwave prediction task, relative to NWMs trained on the mean-square-error loss. This improvement is accomplished with almost no skill reduction in the general temperature prediction task, and it can be efficiently realized through transfer learning, by retraining NWMs with the custom losses for a few epochs. In addition, we find that the use of a symmetric exponential loss reduces the smoothing of NWM forecasts with lead time. Our best NWM is able to outperform persistence in a regressive sense for all lead times and temperature anomaly thresholds considered, and shows positive regressive skill relative to the ECMWF subseasonal-to-seasonal control forecast after 2 weeks.
     Significance Statement: Heatwaves are projected to become stronger and more frequent as a result of global warming. Accurate forecasting of these events would enable the implementation of effective mitigation strategies. Here we analyze the forecast accuracy of artificial intelligence systems trained on historical surface temperature data to predict extreme heat events globally, 1 to 28 days ahead. We find that artificial intelligence systems trained to focus on extreme temperatures are significantly more accurate at predicting heatwaves than systems trained to minimize errors in surface temperatures and remain equally skillful at predicting moderate temperatures. Furthermore, the extreme-focused systems compete with state-of-the-art physics-based forecast systems in the subseasonal range, while incurring a much lower computational cost.
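     As a concrete illustration of the loss design discussed above, the following sketch contrasts a plain mean-square-error loss with an exponentially weighted squared error that emphasizes large temperature anomalies. The weighting form exp(|anomaly| / tau) and the scale parameter tau are illustrative assumptions, not the paper's exact exponential or symmetric exponential losses.

         import numpy as np

         def mse_loss(y_pred, y_true):
             """Standard mean-square error over predicted temperature anomalies."""
             return np.mean((y_pred - y_true) ** 2)

         def exp_weighted_loss(y_pred, y_true, tau=1.0):
             """Squared error weighted by exp(|true anomaly| / tau), so that errors on
             large (extreme) anomalies dominate the loss. Hypothetical form for
             illustration; tau controls how strongly extremes are emphasized."""
             w = np.exp(np.abs(y_true) / tau)
             return np.mean(w * (y_pred - y_true) ** 2)

     Because such a weighted loss only changes the training objective, it is compatible with the transfer-learning strategy described above: a model pre-trained on the mean-square error can be fine-tuned on the extreme-focused loss for a few epochs.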
  2. Abstract: This work integrates machine learning into an atmospheric parameterization to target uncertain mixing processes while maintaining interpretable, predictive, and well‐established physical equations. We adopt an eddy‐diffusivity mass‐flux (EDMF) parameterization for the unified modeling of various convective and turbulent regimes. To avoid drift and instability that plague offline‐trained machine learning parameterizations that are subsequently coupled with climate models, we frame learning as an inverse problem: Data‐driven models are embedded within the EDMF parameterization and trained online in a one‐dimensional vertical global climate model (GCM) column. Training is performed against output from large‐eddy simulations (LES) forced with GCM‐simulated large‐scale conditions in the Pacific. Rather than optimizing subgrid‐scale tendencies, our framework directly targets climate variables of interest, such as the vertical profiles of entropy and liquid water path. Specifically, we use ensemble Kalman inversion to simultaneously calibrate both the EDMF parameters and the parameters governing data‐driven lateral mixing rates. The calibrated parameterization outperforms existing EDMF schemes, particularly in tropical and subtropical locations of the present climate, and maintains high fidelity in simulating shallow cumulus and stratocumulus regimes under increased sea surface temperatures from AMIP4K experiments. The results showcase the advantage of physically constraining data‐driven models and directly targeting relevant variables through online learning to build robust and stable machine learning parameterizations.
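     The calibration described above targets statistics of climate variables rather than pointwise subgrid tendencies. Below is a minimal Python sketch of what such a calibration target might look like, with a toy stand-in for the single-column forward model; the function names, the fake profile, and the covariance weighting are illustrative assumptions, not the paper's implementation.

         import numpy as np

         def scm_statistics(theta, z=np.linspace(0.0, 3000.0, 30)):
             """Toy stand-in for the forward map: run the single-column EDMF model
             with parameters theta and return the calibration targets (here, a fake
             time-mean profile plus a scalar standing in for liquid water path)."""
             profile = theta[0] + theta[1] * z / z.max()
             lwp = np.exp(theta[2])
             return np.append(profile, lwp)

         def misfit(theta, y_les, noise_cov):
             """Covariance-weighted mismatch between simulated and LES statistics;
             ensemble Kalman inversion drives this quantity toward zero without
             requiring gradients of the forward model."""
             r = scm_statistics(theta) - y_les
             return float(r @ np.linalg.solve(noise_cov, r))

     Framing training this way is what allows the EDMF parameters and the data-driven lateral mixing rates to be calibrated jointly, online, inside the host column model.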
  3. Over the last 10,000 years, human activities have transformed Earth through farming, forestry, mining, and industry. The complex results of these activities are now observed and quantified as “human impacts” on Earth’s atmosphere, oceans, biosphere, and geochemistry. While myriad studies have explored facets of human impacts on the planet, they are necessarily technical and often highly focused. Thus, finding reliable quantitative information requires a significant investment of time to assess each quantity and associated uncertainty. We present the Human Impacts Database (www.anthroponumbers.org), which houses a diverse array of such quantities. We review a subset of these values and how they help build intuition for understanding the Earth-human system. While collation alone does not tell us how to best ameliorate human impacts, we contend that any future plans should be made in light of a quantitative understanding of the interconnected ways in which humans influence the planet.
  4. Abstract: Most machine learning applications in Earth system modeling currently rely on gradient‐based supervised learning. This imposes stringent constraints on the nature of the data used for training (typically, residual time tendencies are needed), and it complicates learning about the interactions between machine‐learned parameterizations and other components of an Earth system model. Approaching learning about process‐based parameterizations as an inverse problem resolves many of these issues, since it allows parameterizations to be trained with partial observations or statistics that directly relate to quantities of interest in long‐term climate projections. Here, we demonstrate the effectiveness of Kalman inversion methods in treating learning about parameterizations as an inverse problem. We consider two different algorithms: unscented and ensemble Kalman inversion. Both methods involve highly parallelizable forward model evaluations, converge exponentially fast, and do not require gradient computations. In addition, unscented Kalman inversion provides a measure of parameter uncertainty. We illustrate how training parameterizations can be posed as a regularized inverse problem and solved by ensemble Kalman methods through the calibration of an eddy‐diffusivity mass‐flux scheme for subgrid‐scale turbulence and convection, using data generated by large‐eddy simulations. We find the algorithms amenable to batching strategies, robust to noise and model failures, and efficient in the calibration of hybrid parameterizations that can include empirical closures and neural networks.
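     The core update used by ensemble Kalman inversion can be written in a few lines. The sketch below shows one basic EKI step with perturbed observations, assuming a parameter ensemble of shape (J, p) and forward-model outputs of shape (J, d); the paper's algorithms (including the unscented variant and its regularization) contain details not captured here.

         import numpy as np

         def eki_update(theta, g, y, gamma, rng=None):
             """One ensemble Kalman inversion step (gradient-free).
             theta : (J, p) parameter ensemble
             g     : (J, d) forward-model outputs G(theta_j)
             y     : (d,)   target data (e.g. LES statistics)
             gamma : (d, d) observation/noise covariance"""
             if rng is None:
                 rng = np.random.default_rng(0)
             J = theta.shape[0]
             dtheta = theta - theta.mean(axis=0)
             dg = g - g.mean(axis=0)
             C_tg = dtheta.T @ dg / J          # parameter-output cross-covariance, (p, d)
             C_gg = dg.T @ dg / J              # output covariance, (d, d)
             y_pert = y + rng.multivariate_normal(np.zeros(len(y)), gamma, size=J)
             K = np.linalg.solve(C_gg + gamma, (y_pert - g).T).T   # innovations, (J, d)
             return theta + K @ C_tg.T

     Each iteration needs only the J forward-model evaluations g, which can be run in parallel, consistent with the parallelizability and batching properties highlighted in the abstract.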
  5. Abstract: Because of their limited spatial resolution, numerical weather prediction and climate models have to rely on parameterizations to represent atmospheric turbulence and convection. Historically, largely independent approaches have been used to represent boundary layer turbulence and convection, neglecting important interactions at the subgrid scale. Here we build on an eddy‐diffusivity mass‐flux (EDMF) scheme that represents all subgrid‐scale mixing in a unified manner, partitioning subgrid‐scale fluctuations into contributions from local diffusive mixing and coherent advective structures and allowing them to interact within a single framework. The EDMF scheme requires closures for the interaction between the turbulent environment and the plumes and for local mixing. A second‐order equation for turbulence kinetic energy (TKE) provides one ingredient for the diffusive local mixing closure, leaving a mixing length to be parameterized. Here, we propose a new mixing length formulation, based on constraints derived from the TKE balance. It expresses local mixing in terms of the same physical processes in all regimes of boundary layer flow. The formulation is tested at a range of resolutions and across a wide range of boundary layer regimes, including a stably stratified boundary layer, a stratocumulus‐topped marine boundary layer, and dry convection. Comparison with large‐eddy simulations (LES) shows that the EDMF scheme with this diffusive mixing parameterization accurately captures the structure of the boundary layer and clouds in all cases considered.
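     The diffusive part of the closure described above can be summarized schematically: an eddy diffusivity proportional to a mixing length times the square root of TKE, with the mixing length obtained by blending candidate length scales. The smooth-minimum blend, the candidate scales, and the coefficient below are placeholder assumptions; the paper derives its candidates from the TKE balance and differs in detail.

         import numpy as np

         def eddy_diffusivity(tke, l_candidates, c_m=0.1, lam=1.0):
             """Schematic ED closure: K = c_m * l_mix * sqrt(TKE), where l_mix is a
             smooth minimum over candidate length scales (e.g. limits set by the
             distance to the surface, the TKE balance, and stable stratification)."""
             l = np.asarray(l_candidates, dtype=float)
             w = np.exp(-l / lam)               # soft-min weights: shorter scales dominate
             l_mix = np.sum(l * w) / np.sum(w)  # approaches min(l) as lam -> 0
             return c_m * l_mix * np.sqrt(tke)

     The single-formulation property emphasized in the abstract comes from tying the candidate scales to the TKE balance; the soft minimum here is only one possible way to blend them.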
  6. Abstract: We demonstrate that an extended eddy‐diffusivity mass‐flux (EDMF) scheme can be used as a unified parameterization of subgrid‐scale turbulence and convection across a range of dynamical regimes, from dry convective boundary layers, through shallow convection, to deep convection. Central to achieving this unified representation of subgrid‐scale motions are entrainment and detrainment closures. We model entrainment and detrainment rates as a combination of turbulent and dynamical processes. Turbulent entrainment/detrainment is represented as downgradient diffusion between plumes and their environment. Dynamical entrainment/detrainment is proportional to the ratio of a plume's relative buoyancy to a vertical velocity scale, modulated by heuristic nondimensional functions that represent their relative magnitudes and the enhanced detrainment due to evaporation from clouds in drier environments. We first evaluate the closures offline against entrainment and detrainment rates diagnosed from large‐eddy simulations (LES) in which tracers are used to identify plumes, their turbulent environment, and mass and tracer exchanges between them. The LES are of canonical test cases of a dry convective boundary layer, shallow convection, and deep convection, thus spanning a broad range of regimes. We then compare the LES with the full EDMF scheme, including the new closures, in a single column model (SCM). The results show good agreement between the SCM and LES in quantities that are key for climate models, including thermodynamic profiles, cloud liquid water profiles, and profiles of higher moments of turbulent statistics. The SCM also captures well the diurnal cycle of convection and the onset of precipitation.
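     The dynamical entrainment/detrainment closure described above can be sketched as rates proportional to the ratio of plume relative buoyancy to the square of a vertical velocity scale, with the nondimensional modulation functions left as stand-ins. All coefficients and the exact functional form below are illustrative assumptions, not the calibrated closure from the paper.

         import numpy as np

         def dynamical_entr_detr(b_plume, w_plume, c_eps=0.1, c_del=0.1,
                                 f_eps=1.0, f_del=1.0, w_min=1e-2):
             """Schematic dynamical entrainment (eps) and detrainment (delta) rates:
             proportional to relative buoyancy over a squared vertical velocity scale.
             Positive relative buoyancy favors entrainment, negative favors detrainment;
             f_eps and f_del stand in for the heuristic nondimensional functions (e.g.
             enhanced detrainment from evaporation in drier environments)."""
             w2 = max(w_plume ** 2, w_min ** 2)   # guard against vanishing plume velocity
             eps_dyn = c_eps * f_eps * max(b_plume, 0.0) / w2
             delta_dyn = c_del * f_del * max(-b_plume, 0.0) / w2
             return eps_dyn, delta_dyn

     Turbulent entrainment/detrainment, modeled as downgradient diffusion between plumes and their environment, would add a separate contribution on top of these dynamical rates.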